Scaling Relationships in Back-propagation Learning

Authors

  • Gerald Tesauro
  • Bob Janssens
Abstract

We present an empirical study of the required training time for neural networks to learn to compute the parity function using the back-propagation learning algorithm, as a function of the number of inputs. The parity function is a Boolean predicate whose order is equal to the number of inputs. We find that the training time behaves roughly as 4^n, where n is the number of inputs, for values of n between 2 and 8. This is consistent with recent theoretical analyses of similar algorithms. As part of this study we searched for optimal parameter tunings for each value of n. We suggest that the learning rate should decrease faster than 1/n, the momentum coefficient should approach 1 exponentially, and the initial random weight scale should remain approximately constant.
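
The suggested scalings are easy to exercise in a small experiment. Below is a minimal NumPy sketch, not the authors' code: the hidden-layer size 2n, the exponent 1.5 in the learning-rate schedule, the base 0.75 in the momentum schedule, and the constant 0.3 are all illustrative assumptions, chosen only to match the qualitative suggestions (learning rate falling faster than 1/n, momentum approaching 1 exponentially, initial weight scale constant).

import numpy as np

def parity_data(n):
    # All 2^n binary input patterns and their parity targets.
    X = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)], float)
    return X, (X.sum(axis=1) % 2).reshape(-1, 1)

def train_parity(n, epochs=20000, seed=0):
    rng = np.random.default_rng(seed)
    X, y = parity_data(n)
    h = 2 * n                              # illustrative hidden-layer size
    lr = 0.3 / n ** 1.5                    # falls faster than 1/n (assumed exponent)
    momentum = 1.0 - 0.75 ** n             # approaches 1 exponentially in n
    W1 = rng.uniform(-1, 1, (n, h)); b1 = np.zeros(h)   # constant initial weight scale
    W2 = rng.uniform(-1, 1, (h, 1)); b2 = np.zeros(1)
    vW1, vb1, vW2, vb2 = (np.zeros_like(a) for a in (W1, b1, W2, b2))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for epoch in range(epochs):
        H = sig(X @ W1 + b1)               # forward pass
        out = sig(H @ W2 + b2)
        if np.all((out > 0.5) == (y > 0.5)):
            return epoch                   # every pattern classified correctly
        d_out = (out - y) * out * (1 - out)           # squared-error gradient at output
        d_hid = (d_out @ W2.T) * H * (1 - H)
        for v, g in ((vW2, H.T @ d_out), (vb2, d_out.sum(0)),
                     (vW1, X.T @ d_hid), (vb1, d_hid.sum(0))):
            v *= momentum
            v -= lr * g                    # momentum update, applied in place
        W1 += vW1; b1 += vb1; W2 += vW2; b2 += vb2
    return None                            # no convergence within the epoch budget

for n in range(2, 6):
    print(n, train_parity(n))

For small n, the epoch counts this returns can be compared qualitatively against the roughly 4^n growth reported above.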

Similar Articles

Learning in cortical networks through error back-propagation

To learn efficiently from feedback, cortical networks need to update synaptic weights at multiple levels of the cortical hierarchy. An effective and well-known algorithm for computing such weight changes is error back-propagation. It has been used successfully both in machine learning and in modelling the brain's cognitive functions. However, in the back-propagation algorithm, ...

A Numerical Study on Learning Curves in Stochastic Multilayer Feedforward Networks

The universal asymptotic scaling laws proposed by Amari et al. are studied in large-scale simulations on a CM-5. Small stochastic multi-layer feed-forward networks trained with back-propagation are investigated. For large numbers of training patterns t, the asymptotic generalization error scales as 1/t, as predicted. For a medium range of t, a faster 1/t² scaling is observed. This eff...
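
The difference between 1/t and 1/t² scaling is simply the slope of the learning curve on log-log axes. A minimal sketch of estimating that exponent, with synthetic data standing in for the paper's CM-5 measurements:

import numpy as np

t = np.logspace(1, 4, 20)                            # numbers of training patterns
err = (3.0 / t) * (1 + 0.02 * np.random.default_rng(1).normal(size=t.size))
slope, _ = np.polyfit(np.log(t), np.log(err), 1)     # log-log linear fit
print(f"estimated learning-curve exponent: {slope:.2f}")   # about -1 for a 1/t law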

Learning sets of filters using back-propagation

A learning procedure, called back-propagation, for layered networks of deterministic, neuron-like units has been described previously. The ability of the procedure automatically to discover useful internal representations makes it a powerful tool for attacking difficult problems like speech recognition. This paper describes further research on the learning procedure and presents an example in w...

Semi-Supervised Learning Based Prediction of Musculoskeletal Disorder Risk

This study explores a semi-supervised classification approach, using a random forest as the base classifier, to classify the risk of low-back disorders (LBDs) associated with industrial jobs. Semi-supervised classification uses unlabeled data together with a small number of labelled examples to build a better classifier. The results obtained by the proposed approach are compared with those o...
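
As a rough sketch of the self-training style of semi-supervised learning described here (synthetic data; the study's LBD features, classifier settings, and the 0.9 confidence threshold below are assumptions, not the paper's values), a random forest can iteratively pseudo-label its own confident predictions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:60] = True                        # pretend only 10% of the labels are known
y_work = y.copy()
y_work[~labeled] = -1                      # -1 marks the unlabeled rows

for round_ in range(5):                    # a few self-training rounds
    if labeled.all():
        break
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y_work[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.9    # pseudo-label only confident predictions
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y_work[idx] = clf.classes_[proba[confident].argmax(axis=1)]
    labeled[idx] = True

print("labelled examples after self-training:", labeled.sum())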

Learning Generative ConvNet with Continuous Latent Factors by Alternating Back-Propagation

The supervised learning of the discriminative convolutional neural network (ConvNet or CNN) is powered by back-propagation on the parameters. In this paper, we show that the unsupervised learning of a popular top-down generative ConvNet model with continuous latent factors can be accomplished by a learning algorithm that consists of alternately performing back-propagation on both the latent f...
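
The alternation described above, a back-propagation step on the latent factors (inference) followed by one on the parameters (learning), can be sketched with a deliberately simplified setup: a linear generator stands in for the paper's ConvNet, and plain gradient descent stands in for its Langevin inference step.

import numpy as np

rng = np.random.default_rng(0)
d_z, d_x, n = 4, 16, 200
W_true = rng.normal(size=(d_x, d_z))
X = rng.normal(size=(n, d_z)) @ W_true.T      # toy observations from a linear "generator"

W = rng.normal(scale=0.1, size=(d_x, d_z))    # generator parameters (theta)
Z = rng.normal(scale=0.1, size=(n, d_z))      # one latent vector per observation

for step in range(2000):
    R = Z @ W.T - X                           # reconstruction residual
    Z -= 0.05 * (R @ W + Z)                   # inference: gradient step on latents
                                              # (the +Z term is a Gaussian prior on Z)
    R = Z @ W.T - X
    W -= 0.05 / n * (R.T @ Z)                 # learning: gradient step on parameters

print("mean squared reconstruction error:", np.mean((Z @ W.T - X) ** 2))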


Journal:
  • Complex Systems

Volume 2, Issue -

Pages -

Publication year: 1988